

Search for: All records

Creators/Authors contains: "Shamsaei, Nima"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not yet be available without charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Federated learning (FL) enables multiple participants to train a global machine learning model without sharing their private training data. Peer-to-peer (P2P) FL advances existing centralized FL paradigms by eliminating the server that aggregates local models from participants and then updates the global model. However, P2P FL is vulnerable to (i) honest-but-curious participants whose objective is to infer private training data of other participants, and (ii) Byzantine participants who can transmit arbitrarily manipulated local models to corrupt the learning process. P2P FL schemes that simultaneously guarantee Byzantine resilience and preserve privacy have been less studied. In this paper, we develop Brave, a protocol that ensures the Byzantine Resilience And priVacy-prEserving property for P2P FL in the presence of both types of adversaries. We show that Brave preserves privacy by establishing that any honest-but-curious adversary cannot infer other participants’ private data by observing their models. We further prove that Brave is Byzantine-resilient, which guarantees that all benign participants converge to an identical model that deviates from a global model trained without Byzantine adversaries by a bounded distance. We evaluate Brave against three state-of-the-art adversaries in a P2P FL setting for image classification tasks on the benchmark datasets CIFAR10 and MNIST. Our results show that global models learned with Brave in the presence of adversaries achieve comparable classification accuracy to global models trained in the absence of any adversary.
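The Byzantine-resilient aggregation the abstract describes can be illustrated with a generic robust rule such as a coordinate-wise trimmed mean, sketched below. This is only a minimal illustration of the class of technique, assuming an upper bound f on the number of Byzantine peers; it is not necessarily the aggregation rule Brave itself uses, and the peer updates shown are toy values.

```python
# Minimal sketch of Byzantine-resilient aggregation via a coordinate-wise
# trimmed mean. This illustrates the general idea of tolerating manipulated
# local models; Brave's actual protocol may use a different rule.
def trimmed_mean(updates, f):
    """Aggregate peer parameter vectors, discarding the f largest and
    f smallest values in each coordinate before averaging.

    updates: list of equal-length parameter vectors (lists of floats)
    f: assumed upper bound on the number of Byzantine peers
    """
    n = len(updates)
    assert n > 2 * f, "need more than 2f participants"
    dim = len(updates[0])
    aggregated = []
    for j in range(dim):
        column = sorted(u[j] for u in updates)
        kept = column[f:n - f]  # drop f extreme values on each side
        aggregated.append(sum(kept) / len(kept))
    return aggregated

# Three benign peers near the true model, one Byzantine peer sending
# arbitrarily large values; the trimmed mean stays near the benign updates.
peers = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1], [100.0, -100.0]]
agg = trimmed_mean(peers, f=1)
```

With f = 1, the single outlier in each coordinate is discarded, so the aggregate stays within the range of the benign updates regardless of what the Byzantine peer transmits.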
  2. As large language models (LLMs) become increasingly integrated into real-world applications such as code generation and chatbot assistance, extensive efforts have been made to align LLM behavior with human values, including safety. Jailbreak attacks, aiming to provoke unintended and unsafe behaviors from LLMs, remain a significant LLM safety threat. In this paper, we aim to defend LLMs against jailbreak attacks by introducing SafeDecoding, a safety-aware decoding strategy for LLMs to generate helpful and harmless responses to user queries. Our insight in developing SafeDecoding is based on the observation that, even though probabilities of tokens representing harmful contents outweigh those representing harmless responses, safety disclaimers still appear among the top tokens after sorting tokens by probability in descending order. This allows us to mitigate jailbreak attacks by identifying safety disclaimers and amplifying their token probabilities, while simultaneously attenuating the probabilities of token sequences that are aligned with the objectives of jailbreak attacks. We perform extensive experiments on five LLMs using six state-of-the-art jailbreak attacks and four benchmark datasets. Our results show that SafeDecoding significantly reduces attack success rate and harmfulness of jailbreak attacks without compromising the helpfulness of responses to benign user queries while outperforming six defense methods. Our code is publicly available at: https://github.com/uw-nsl/SafeDecoding 
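The decoding-time adjustment the abstract describes (amplifying safety-disclaimer token probabilities while attenuating attack-aligned ones) can be sketched as a simple logit shift followed by a softmax. The token ids, the alpha value, and the "safe"/"unsafe" token sets below are illustrative assumptions for a toy vocabulary, not SafeDecoding's actual parameters or identification procedure.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def safe_decode_step(logits, safe_ids, unsafe_ids, alpha=2.0):
    """Return an adjusted next-token distribution: add alpha to the logits
    of safety-disclaimer tokens and subtract alpha from tokens aligned
    with the jailbreak objective. A sketch of the idea, not the paper's
    exact update rule."""
    adjusted = list(logits)
    for i in safe_ids:
        adjusted[i] += alpha
    for i in unsafe_ids:
        adjusted[i] -= alpha
    return softmax(adjusted)

# Toy 3-token vocabulary: token 0 = harmful continuation ("Sure"),
# token 1 = safety disclaimer ("Sorry"), token 2 = neutral filler.
logits = [3.0, 2.0, 1.0]  # harmful token is initially the most likely
probs = safe_decode_step(logits, safe_ids={1}, unsafe_ids={0})
```

After the shift, the disclaimer token becomes the most probable next token even though the harmful token originally had the highest logit, which is the qualitative effect the abstract relies on.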
  3. Free, publicly-accessible full text available March 1, 2026
  4. Free, publicly-accessible full text available September 1, 2026
  5. Free, publicly-accessible full text available January 1, 2026
  6. This paper presents a search for massive, charged, long-lived particles with the ATLAS detector at the Large Hadron Collider using an integrated luminosity of $$140~\mathrm{fb}^{-1}$$ of proton-proton collisions at $$\sqrt{s}=13$$~TeV. These particles are expected to move significantly slower than the speed of light. In this paper, two signal regions provide complementary sensitivity. In one region, events are selected with at least one charged-particle track with high transverse momentum, large specific ionisation measured in the pixel detector, and time of flight to the hadronic calorimeter inconsistent with the speed of light. In the other region, events are selected with at least two tracks of opposite charge which both have a high transverse momentum and an anomalously large specific ionisation. The search is sensitive to particles with lifetimes greater than about 3 ns with masses ranging from 200 GeV to 3 TeV. The results are interpreted to set constraints on the supersymmetric pair production of long-lived R-hadrons, charginos and staus, with mass limits extending beyond those from previous searches over broad ranges of lifetime.
    Free, publicly-accessible full text available July 1, 2026
  7. Free, publicly-accessible full text available December 1, 2025